This study aims to develop a novel streetlight management system powered by computer-vision technology running on closed-circuit television (CCTV) cameras. The system allows light-emitting diode (LED) streetlights to automatically switch on at an appropriate brightness by recognizing the presence of pedestrians or vehicles through semantic image segmentation of the video, and to dim the streetlights when they are absent.
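The control logic described above can be sketched in a few lines. This is a minimal illustration, not the paper's system: the class IDs, pixel threshold, and brightness levels are all hypothetical assumptions standing in for the detector and dimming policy.

```python
# Hypothetical sketch: mapping a semantic-segmentation mask to an LED
# brightness level. Class IDs, threshold, and brightness values are
# illustrative assumptions, not values from the paper.
PEDESTRIAN, VEHICLE, BACKGROUND = 1, 2, 0

def brightness_from_mask(mask, full=100, dimmed=20, min_pixels=5):
    """Return an LED brightness (percent) from a per-pixel class mask."""
    flat = [c for row in mask for c in row]
    active = sum(1 for c in flat if c in (PEDESTRIAN, VEHICLE))
    # Light fully when enough pedestrian/vehicle pixels are detected,
    # otherwise dim the lamp instead of switching it off entirely.
    return full if active >= min_pixels else dimmed

empty_scene = [[BACKGROUND] * 4 for _ in range(4)]
busy_scene = [[PEDESTRIAN] * 4 for _ in range(2)] + [[BACKGROUND] * 4 for _ in range(2)]
print(brightness_from_mask(empty_scene))  # dimmed level
print(brightness_from_mask(busy_scene))   # full brightness
```

In practice the mask would come from a segmentation network run on each CCTV frame, with the threshold tuned to suppress detector noise.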
In nuclear imaging, limited resolution causes partial volume effects (PVEs) that affect image sharpness and quantitative accuracy. Partial volume correction (PVC) using high-resolution anatomical information from CT or MRI has been demonstrated to be effective. However, such anatomy-guided methods typically require tedious image registration and segmentation steps. Accurately segmented organ templates are also hard to obtain, particularly in cardiac SPECT imaging, due to the lack of hybrid SPECT/CT scanners with high-end CT and the associated motion artifacts. Slight mis-registration or mis-segmentation would result in severe degradation of image quality after PVC. In this work, we develop a deep-learning-based method for fast cardiac SPECT PVC without anatomical information and the associated organ segmentation. The proposed network involves a densely-connected multi-dimensional dynamic mechanism, allowing the convolutional kernels to adapt to the input images even after the network is fully trained. Intra-myocardial blood volume (IMBV) is introduced as an additional clinical loss function for network optimization. The proposed network demonstrated promising performance on 28 canine studies acquired on a GE Discovery NM/CT 570c dedicated cardiac SPECT scanner with Technetium-99m-labeled red blood cells. This work showed that the proposed network with the densely-connected dynamic mechanism produced superior results compared with the same network without such a mechanism. Results also showed that the proposed network without anatomical information could produce images with statistically comparable IMBV measurements to those produced by anatomy-guided PVC methods, which could be helpful for clinical translation.
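The dynamic-convolution idea, where attention weights computed from the input mix several candidate kernels so the effective kernel changes per input even with fixed trained parameters, can be sketched in 1-D. All weights below are toy values, not the paper's trained network:

```python
import math

# Illustrative sketch of an input-conditioned ("dynamic") convolution in 1-D.
# Attention over K candidate kernels is derived from a global summary of the
# input, so the effective kernel adapts per input after training.

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def dynamic_conv1d(signal, kernels, attn_weights):
    # Route: global average of the input -> attention over candidate kernels.
    pooled = sum(signal) / len(signal)
    alpha = softmax([w * pooled for w in attn_weights])
    k = len(kernels[0])
    # Mix the candidate kernels into one effective kernel for this input.
    mixed = [sum(a * ker[i] for a, ker in zip(alpha, kernels)) for i in range(k)]
    # Valid convolution (no padding).
    return [sum(mixed[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

out = dynamic_conv1d([1.0, 2.0, 3.0, 4.0],
                     kernels=[[1.0, 0.0], [0.0, 1.0]],
                     attn_weights=[0.5, -0.5])
print(len(out))  # 3 output positions for a length-4 input and width-2 kernel
```

The paper's mechanism operates multi-dimensionally inside a densely-connected network; this sketch only shows the core routing step.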
Modern web systems such as social media and e-commerce contain rich content expressed in images and text. Leveraging information from multiple modalities can improve the performance of machine learning tasks such as classification and recommendation. In this paper, we propose Cross-Modality Attention Contrastive Language-Image Pre-training (CMA-CLIP), a new framework that unifies two types of cross-modality attention, sequence-wise attention and modality-wise attention, to effectively fuse information from image and text pairs. The sequence-wise attention enables the framework to capture fine-grained relationships between image patches and text tokens, while the modality-wise attention weighs each modality by its relevance to the downstream tasks. In addition, by adding task-specific modality-wise attentions and multilayer perceptrons, our proposed framework is capable of performing multi-task classification with multiple modalities. We conduct experiments on a Major Retail Website Product Attribute (MRWPA) dataset and two public datasets, Food101 and Fashion-Gen. The results show that CMA-CLIP outperforms the pre-trained and fine-tuned CLIP by an average of 11.9% in recall at the same level of precision on the MRWPA dataset for multi-task classification. It also surpasses the state-of-the-art method on the Fashion-Gen dataset by 5.5% in accuracy and achieves competitive performance on the Food101 dataset. Through detailed ablation studies, we further demonstrate the effectiveness of the cross-modality attention modules and our method's robustness against noise in image and text inputs, which is a common challenge in practice.
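Modality-wise attention as described above can be illustrated with a minimal sketch: each modality embedding is scored for task relevance, and the fused representation is a softmax-weighted sum. The scoring vectors here are illustrative stand-ins, not CMA-CLIP's learned parameters:

```python
import math

# Minimal sketch of modality-wise attention: one relevance logit per modality,
# softmax-normalized into fusion weights. Scoring vectors are toy values.

def modality_wise_attention(embeddings, score_vectors):
    # One relevance logit per modality: dot(embedding, score vector).
    logits = [sum(e * s for e, s in zip(emb, vec))
              for emb, vec in zip(embeddings, score_vectors)]
    m = max(logits)
    exp = [math.exp(l - m) for l in logits]
    weights = [v / sum(exp) for v in exp]
    dim = len(embeddings[0])
    fused = [sum(w * emb[i] for w, emb in zip(weights, embeddings))
             for i in range(dim)]
    return fused, weights

image_emb, text_emb = [1.0, 0.0], [0.0, 1.0]
fused, weights = modality_wise_attention([image_emb, text_emb],
                                         score_vectors=[[2.0, 0.0], [1.0, 0.0]])
print(weights)  # the image modality receives the larger weight here
```

A noisy or irrelevant modality would receive a low weight, which is how this mechanism provides the robustness the ablations measure.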
Existing graph contrastive learning methods rely on augmentation techniques based on random perturbations (e.g., randomly adding or dropping edges and nodes). Nevertheless, altering certain edges or nodes can unexpectedly change the graph characteristics, and choosing the optimal perturbation ratio for each dataset requires onerous manual tuning. In this paper, we introduce Implicit Graph Contrastive Learning (iGCL), which utilizes augmentations in a latent space learned from a Variational Graph Auto-Encoder by reconstructing the graph topological structure. Importantly, instead of explicitly sampling augmentations from the latent distributions, we further propose an upper bound for the expected contrastive loss to improve the efficiency of our learning algorithm. Thus, graph semantics can be preserved within the augmentations in an intelligent way, without arbitrary manual design or prior human knowledge. Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance compared to other benchmarks, and ablation studies further demonstrate the effectiveness of the modules in iGCL.
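The contrastive objective underlying methods like iGCL is typically the InfoNCE loss: each embedding should be close to its augmented view and far from the other graphs in the batch. A minimal pure-Python version is below (the paper's expected-loss upper bound over latent augmentations is not reproduced here):

```python
import math

# Minimal InfoNCE contrastive loss over paired augmented views.
# views_a[i] and views_b[i] are two views of the same graph i.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(views_a, views_b, temperature=0.5):
    loss = 0.0
    for i, anchor in enumerate(views_a):
        sims = [math.exp(cosine(anchor, other) / temperature) for other in views_b]
        loss += -math.log(sims[i] / sum(sims))  # positive pair vs. all pairs
    return loss / len(views_a)

aligned = info_nce([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
shuffled = info_nce([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
print(aligned < shuffled)  # matched views give the lower loss
```

iGCL's contribution is to take the expectation of this kind of loss over latent-space augmentations analytically, via an upper bound, rather than by sampling.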
Single-photon emission computed tomography (SPECT) is a widely applied imaging modality for the diagnosis of coronary artery disease. Attenuation maps (μ-maps) derived from computed tomography (CT) are used for attenuation correction (AC) to improve the diagnostic accuracy of cardiac SPECT. However, SPECT and CT are acquired sequentially in clinical practice, which potentially induces misregistration between the two scans. Convolutional neural networks (CNNs) are powerful tools for medical image registration. Previous CNN-based cross-modality registration methods either directly concatenated the two input modalities as early feature fusion, or extracted image features using two separate CNN modules for late fusion. These methods do not fully extract or fuse the cross-modality information. Besides, deep-learning-based rigid registration of cardiac SPECT and CT-derived μ-maps has not been investigated before. In this paper, we propose a Dual-Branch Squeeze-Fusion-Excitation (DuSFE) module for the registration of cardiac SPECT and CT-derived μ-maps. DuSFE fuses the knowledge from multiple modalities to recalibrate both the channel-wise and spatial features of each modality. DuSFE can be embedded at multiple convolutional layers to enable feature fusion at different spatial dimensions. Our studies using clinical data demonstrated that a network embedded with DuSFE generated lower registration errors, and therefore more accurate AC SPECT images, than previous methods.
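An illustrative squeeze-fusion-excitation step in the spirit of DuSFE: global-average-pool ("squeeze") each modality's channels, fuse the two descriptors, then produce sigmoid gates ("excitation") that recalibrate the channels of each branch. The fusion and gating below are toy stand-ins for the module's learned layers:

```python
import math

# Sketch of channel recalibration driven by cross-modality fusion.
# feat_a / feat_b: per-modality features, shape [channels][spatial].

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_fusion_excitation(feat_a, feat_b):
    squeeze_a = [sum(ch) / len(ch) for ch in feat_a]       # channel descriptors
    squeeze_b = [sum(ch) / len(ch) for ch in feat_b]
    fused = [a + b for a, b in zip(squeeze_a, squeeze_b)]  # cross-modality fusion
    gates = [sigmoid(f) for f in fused]                    # excitation gates
    recal_a = [[g * v for v in ch] for g, ch in zip(gates, feat_a)]
    recal_b = [[g * v for v in ch] for g, ch in zip(gates, feat_b)]
    return recal_a, recal_b, gates

spect  = [[1.0, 3.0], [0.0, 0.0]]   # two channels, two spatial positions
mu_map = [[2.0, 2.0], [0.0, 0.0]]
ra, rb, gates = squeeze_fusion_excitation(spect, mu_map)
print(gates[0] > gates[1])  # the jointly active channel is amplified more
```

In the actual module the squeeze descriptors pass through learned fusion and excitation layers, and spatial recalibration is applied alongside the channel gating shown here.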
This paper introduces the Wasserstein Adversarially Regularized Graph Autoencoder (WARGA), an implicit generative algorithm that directly regularizes the latent distribution of node embeddings towards a target distribution via the Wasserstein metric. The proposed method has been validated on tasks of link prediction and node clustering on real-world graphs, in which WARGA generally outperforms state-of-the-art models based on the Kullback-Leibler (KL) divergence and typical adversarial frameworks.
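The Wasserstein regularization idea can be sketched via Kantorovich-Rubinstein duality: a 1-Lipschitz critic is trained so that the gap between its mean score on target-distribution samples and on encoded node embeddings approximates the Wasserstein-1 distance. The linear critic below is an illustrative stand-in for WARGA's learned critic network:

```python
# Sketch of the critic-based Wasserstein-1 estimate used for regularization.
# The encoder is trained to shrink the gap; the critic to enlarge it.

def critic(z, weights):
    # A linear critic is trivially Lipschitz; real critics are clipped or
    # gradient-penalized networks.
    return sum(w * v for w, v in zip(weights, z))

def wasserstein_gap(target_samples, embedding_samples, weights):
    mean_t = sum(critic(z, weights) for z in target_samples) / len(target_samples)
    mean_e = sum(critic(z, weights) for z in embedding_samples) / len(embedding_samples)
    return mean_t - mean_e

target = [[0.0, 0.0], [1.0, 1.0]]        # samples from the target distribution
embeddings = [[5.0, 5.0], [6.0, 6.0]]    # far-away node embeddings
gap = wasserstein_gap(target, embeddings, weights=[-1.0, -1.0])
print(gap)  # large gap: the embeddings are far from the target distribution
```

Replacing the KL term of a standard VGAE objective with this gap is what distinguishes the Wasserstein-regularized variant from KL-based and typical adversarial baselines.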
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
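Encoding 3D point coordinates into token features is the step that implicitly aligns the modalities. A generic sinusoidal position encoding over (x, y, z) is sketched below; the exact encoding used by CMT may differ, so treat this as a schematic:

```python
import math

# Generic sinusoidal encoding of a 3D coordinate into a fixed-length feature
# vector, the kind of coordinate encoding that lets image and point-cloud
# tokens share a common positional space.

def encode_3d_point(point, num_freqs=2):
    """Map an (x, y, z) coordinate to a feature vector of sin/cos terms."""
    feats = []
    for coord in point:
        for k in range(num_freqs):
            freq = 2.0 ** k
            feats.append(math.sin(freq * coord))
            feats.append(math.cos(freq * coord))
    return feats

token_pos = encode_3d_point((1.0, 2.0, 3.0))
print(len(token_pos))  # 3 coords x 2 frequencies x (sin, cos) = 12 features
```

Because both modalities' tokens carry encodings derived from the same 3D frame, the transformer can attend across them without an explicit view transformation.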
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the types of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
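The feature-level step above can be sketched as masked average pooling of support features into a dynamic class center, followed by similarity-based re-weighting of query features. Shapes and the similarity-to-weight mapping below are illustrative assumptions, not RefT's exact design:

```python
import math

# Sketch: support masks select foreground support features, which are averaged
# into a dynamic class center; query features are then re-weighted by their
# cosine similarity to that center.

def class_center(support_feats, support_mask):
    """Masked average pooling: support_feats[i] counts only if mask[i] == 1."""
    selected = [f for f, m in zip(support_feats, support_mask) if m == 1]
    dim = len(support_feats[0])
    return [sum(f[i] for f in selected) / len(selected) for i in range(dim)]

def reweight_query(query_feats, center):
    def cos(u, v):
        d = sum(a * b for a, b in zip(u, v))
        return d / (math.sqrt(sum(a * a for a in u)) *
                    math.sqrt(sum(b * b for b in v)) + 1e-8)
    # Scale each query feature by (1 + similarity) so class-relevant
    # locations are enhanced rather than suppressed.
    return [[(1.0 + cos(q, center)) * v for v in q] for q in query_feats]

center = class_center([[1.0, 0.0], [0.0, 9.0]], support_mask=[1, 0])
out = reweight_query([[2.0, 0.0], [0.0, 2.0]], center)
print(out[0][0] > 2.0)  # the center-aligned query feature is enhanced
```

The instance-level enhancement (linking support object queries to query-image object queries via cross-attention) operates on top of this re-weighted feature map.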
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
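A minimal sketch combining knowledge distillation with a fairness penalty, in the spirit of (but much simpler than) RELIANT: the student matches the teacher's soft predictions while a demographic-parity term penalizes gaps in the student's positive rate across sensitive groups. The `lam` weight and the parity penalty are illustrative choices, not RELIANT's formulation:

```python
import math

# Distillation loss (KL between teacher and student soft predictions) plus a
# demographic-parity fairness penalty on the student.

def kl(p, q, eps=1e-12):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def fair_kd_loss(teacher_probs, student_probs, groups, lam=1.0):
    # Distillation term: average KL(teacher || student) over nodes.
    distill = sum(kl(t, s) for t, s in zip(teacher_probs, student_probs)) / len(groups)
    # Fairness term: gap in mean positive-class probability between groups.
    def rate(g):
        members = [s[1] for s, grp in zip(student_probs, groups) if grp == g]
        return sum(members) / len(members)
    parity_gap = abs(rate(0) - rate(1))
    return distill + lam * parity_gap

teacher = [[0.9, 0.1], [0.2, 0.8]]
biased_student = [[0.9, 0.1], [0.2, 0.8]]  # copies the teacher exactly
loss = fair_kd_loss(teacher, biased_student, groups=[0, 1])
print(loss)  # nonzero despite perfect imitation, due to the group gap
```

The example shows why plain KD inherits bias: a student that imitates the teacher perfectly has zero distillation loss but still pays the fairness penalty, so the fairness term is what pushes the student away from the teacher's biased behavior.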